2. Reading Input Using the Touch Gestures
TouchPanel.GetState returns simple
information about how the user is touching the screen; in many cases,
this information will be perfectly sufficient for games that you might
want to write. TouchPanel offers an alternative high-level way to read input, however, called Gestures.
The Gestures API recognizes a series of common
movement types that the user might make when touching the screen and
reports them back by telling you what type of movement has been detected
as well as the relevant screen coordinates for the movement.
The recognized gestures are as follows:
Tap: The user has briefly pressed and released contact with the screen.
DoubleTap: The user has quickly tapped the screen twice in the same location.
Hold: The user has made sustained contact with the same point on the screen for a small period of time.
VerticalDrag: The user is holding contact with the screen and moving the touch point vertically.
HorizontalDrag: The user is holding contact with the screen and moving the touch point horizontally.
FreeDrag: The user is holding contact and moving the touch point around the screen in any direction.
DragComplete: Indicates that a previous VerticalDrag, HorizontalDrag, or FreeDrag has concluded, and contact with the screen has been released.
Flick: The user has moved a contact point across the screen and released contact while still moving.
Pinch: Two simultaneous touch points have been established and moved on the screen.
PinchComplete: Indicates that a previous Pinch has concluded, and contact with the screen has been released.
This list contains some very useful input mechanisms
that your user will no doubt be familiar with. Using them saves a lot of
effort tracking previous positions and touch points, and allows the
gestures system to do all the work for us.
NOTE
The Gestures API is currently implemented only in
XNA for Windows Phone. If you intend to port your game to the Windows
or Xbox 360 platforms in the future, you will need to find an
alternative way of reading input for those platforms.
Let's look at the Gestures API and each of the
supported gesture types in more detail and see exactly how they all work
and how they can be used.
2.1. Enabling the Gestures
Before you can use gestures you must tell XNA which
of the gestures you are interested in being notified about. It is
potentially able to track all of them at once, but it is likely that
certain gestures are going to be unwanted in any given situation.
Enabling only those gestures that you need improves the performance of
the gesture recognition engine and also reduces the chance that a
gesture will be interpreted differently from the way that you intended.
All the gestures are disabled by default. Attempting to read gesture information in this state will result in an exception.
To enable the appropriate gestures, logically OR together the required values from the GestureType enumeration and then provide the result to the TouchPanel.EnabledGestures property. For example, the code in Listing 3 enables the tap, hold, and free drag gestures.
Example 3. Enabling gestures required for the game
// Enable the gestures that we want to be able to respond to
TouchPanel.EnabledGestures = GestureType.Tap | GestureType.Hold |
                             GestureType.FreeDrag;
The enabled gestures can be set or changed at any
stage in your game. If you find that you are moving from the main game
into a different area of functionality (such as an options screen or a
high-score table) and you need to change the gestures that are to be
processed, simply reassign the EnabledGestures property as needed.
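For example, a game might need one set of gestures during play and a different set on a menu screen. The following sketch shows how the property might be reassigned in each case; the EnterGameplay and EnterMenu methods are hypothetical names used for illustration.

```csharp
// Sketch: switching the active gesture set when moving between
// game areas. EnterGameplay and EnterMenu are hypothetical methods.
void EnterGameplay()
{
    // Gameplay responds to taps and free dragging
    TouchPanel.EnabledGestures = GestureType.Tap | GestureType.FreeDrag;
}

void EnterMenu()
{
    // The menu needs only taps and vertical scrolling
    TouchPanel.EnabledGestures = GestureType.Tap | GestureType.VerticalDrag;
}
```

Reassigning the property takes effect immediately, so it can be called whenever the game's state changes.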
2.2. Processing Gestures
Once the required gestures have been enabled, you can begin waiting for them to occur in your game's Update
function. Unlike reading the raw touch data, gesture information is fed
via a queue, and it is important that this queue is fully processed and
emptied each update. Without this, it is possible for old events to be
picked up and processed some time after they actually took place, making
your game feel slow and laggy.
To check to see whether there are any gestures in the queue, query the TouchPanel.IsGestureAvailable property. This can be used as part of a while loop to ensure that all waiting gesture objects within the queue are processed.
If IsGestureAvailable returns true, the next gesture can be read (and removed) from the queue by calling the TouchPanel.ReadGesture function. This returns a GestureSample
object containing all the required details about the gesture. Some of
the useful properties of this object include the following:
GestureType: This property indicates
which of the enabled gestures has resulted in the creation of this
object. It will contain a value from the same GestureType enumeration that was used to enable the gestures, and can be checked with a switch statement or similar construct to process each gesture in the appropriate way.
Position: A Vector2 that contains the location on the screen at which the gesture occurred.
Position2: For the Pinch gesture, this property contains the position of the second touch point.
Delta: A Vector2 containing the distance that the touch point has moved since the gesture was last measured.
Delta2: For the Pinch gesture, this property contains the delta of the second touch point.
A typical loop to process the gesture queue might look something like the code shown in Listing 4.
Example 4. Processing and clearing the gestures queue
while (TouchPanel.IsGestureAvailable)
{
    // Read the next gesture
    GestureSample gesture = TouchPanel.ReadGesture();
    switch (gesture.GestureType)
    {
        case GestureType.Tap: Shoot(gesture.Position); break;
        case GestureType.FreeDrag: Move(gesture.Position); break;
    }
}
2.3. Tap and DoubleTap
The Tap gesture fires when you briefly touch and release the screen without moving the touch point. The DoubleTap
gesture fires when you quickly touch, release, and then touch the
screen again without any movement taking place. If both of these
gestures are enabled, both a Tap and a DoubleTap gesture will be reported in quick succession.
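Because a DoubleTap is always preceded by a Tap, the two can simply be handled as separate cases in the processing loop. A minimal sketch follows; SelectItem and ActivateItem are hypothetical game methods used for illustration.

```csharp
// Sketch: responding to both Tap and DoubleTap. A DoubleTap is
// always preceded by a Tap at the same location, so the Tap handler
// should do nothing that the DoubleTap handler cannot tolerate.
switch (gesture.GestureType)
{
    case GestureType.Tap:
        SelectItem(gesture.Position);    // hypothetical method
        break;
    case GestureType.DoubleTap:
        ActivateItem(gesture.Position);  // hypothetical method
        break;
}
```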
Note that repeat rapid taps of the screen are not
quite as responsive through the Gestures API as they are by reading the
raw touch information. If you need to be very responsive to lots of
individual screen taps, you might find raw touch data more appropriate.
2.4. Hold
The Hold gesture fires after stationary contact has been maintained for a brief period of time (about a second).
If the touch point moves too far from the initial
contact position, the hold gesture will not fire. This means that,
although it is quite possible for a Hold to fire after a Tap or DoubleTap, it is less likely after one of the drag gestures.
2.5. VerticalDrag, HorizontalDrag and FreeDrag
The three drag gestures can be used independently or together, though using FreeDrag
at the same time as one of the axis-aligned drags can be awkward
because once XNA has decided the direction of movement, it doesn't
change. Beginning a horizontal drag and then moving vertically will
continue to be reported as a horizontal drag. For this reason, it is
generally better to stick to either axis-aligned drags or free drags,
but not mix the two.
In addition to reporting the position within the returned GestureSample object, XNA also returns the Delta
of the movement—the distance that the touch point has moved on the x
and y axes since the last measurement. This can be useful if you want to
scroll objects on the screen because it is generally more useful than
the actual touch position itself. For VerticalDrag and HorizontalDrag, only the relevant axis value of the Delta structure will be populated; the other axis value will always contain 0.
Once a drag has started, it will continually report
the touch position each time it moves. Unlike when reading raw input, no
gesture data will be added to the queue if the touch point is
stationary. When the touch point is released and the drag terminates, a DragComplete gesture type will be reported.
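Because Delta describes relative movement, scrolling content in response to a drag is just a matter of adding it to the current position each update. A minimal sketch, assuming scrollOffset is a Vector2 field of your game class:

```csharp
// Sketch: scrolling content with a FreeDrag gesture. scrollOffset is
// an assumed Vector2 field holding the current scroll position.
while (TouchPanel.IsGestureAvailable)
{
    GestureSample gesture = TouchPanel.ReadGesture();
    if (gesture.GestureType == GestureType.FreeDrag)
    {
        // Add the relative movement to the scroll position
        scrollOffset += gesture.Delta;
    }
}
```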
2.6. Flick
Flick gestures are triggered when the user
releases contact with the screen while still moving the touch point.
This tends to be useful for initiating kinetic scrolling,
in which objects continue moving after touch is released in the
direction that the user had been moving.
To tell how fast and in which direction the flick occurred, read the GestureSample.Delta
property. Unlike drag gestures, however, this property contains the
movement distance for each axis measured in pixels per second, rather
than pixels since the previous position measurement.
To scale this to pixels per update to retain the existing motion, we can multiply the Delta vector by the duration of each update, which we can retrieve from the Game class's TargetElapsedTime property. The scaled delta value calculation is shown in Listing 5.
Example 5. Scaling the Flick delta to represent pixels-per-Update rather than pixels-per-second
Vector2 deltaPerUpdate = gesture.Delta * (float)TargetElapsedTime.TotalSeconds;
One piece of information that we unfortunately do not get from the Flick
gesture is the position from which it is being flicked, which is
instead always returned as the coordinate (0, 0). To determine where the
flick originated, we therefore need to remember the position of a
previous gesture, and the only gestures that will reliably provide this
information are the drag gestures. It is therefore likely that you will
need to have a drag gesture enabled for this purpose.
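One way to approximate the flick origin, then, is to store the position of the most recent drag gesture in a field and read it back when the flick arrives. A sketch, assuming lastDragPosition is a Vector2 field and BeginKineticScroll is a hypothetical method; both FreeDrag and Flick must be enabled for this to work:

```csharp
// Sketch: tracking the last drag position so that a subsequent Flick
// can be given an approximate origin. lastDragPosition is an assumed
// Vector2 field; BeginKineticScroll is a hypothetical method.
switch (gesture.GestureType)
{
    case GestureType.FreeDrag:
        // Remember where the user last dragged to
        lastDragPosition = gesture.Position;
        break;
    case GestureType.Flick:
        // gesture.Position is always (0, 0) for flicks, so use the
        // stored drag position as the approximate flick origin
        BeginKineticScroll(lastDragPosition, gesture.Delta);
        break;
}
```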
2.7. Pinch
When the user makes contact with the screen with two fingers at once, a Pinch
gesture will be initiated and will report on the position of both touch
points for the duration of the contact with the screen. As with the
drag gestures, updates will be provided only if one or both of the touch
points has actually moved.
XNA will ensure that the same point is reported in each of its position and delta properties (Position, Position2, Delta, and Delta2), so you don't need to worry about them swapping over unexpectedly.
Once either of the contacts with the screen ends, a PinchComplete
gesture is added to the queue to indicate that no further updates from
this gesture will be sent. If the remaining touch point continues to be
held, it will initiate a new gesture once it begins to move.
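A common use of the Pinch gesture is zooming: comparing the distance between the two touch points before and after each update yields a scale factor. A sketch under the assumption that zoomLevel is a float field of your game class; the previous positions are recovered by subtracting the deltas from the current positions.

```csharp
// Sketch: deriving a zoom factor from a Pinch gesture. zoomLevel is
// an assumed float field on the game class.
if (gesture.GestureType == GestureType.Pinch)
{
    // Current positions of the two touch points
    Vector2 a = gesture.Position;
    Vector2 b = gesture.Position2;
    // Positions before this update (current position minus delta)
    Vector2 oldA = a - gesture.Delta;
    Vector2 oldB = b - gesture.Delta2;

    float newDistance = Vector2.Distance(a, b);
    float oldDistance = Vector2.Distance(oldA, oldB);
    if (oldDistance > 0)
    {
        // Scale the zoom by the change in finger separation
        zoomLevel *= newDistance / oldDistance;
    }
}
```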
Just as with multitouch data from the raw touch API,
testing pinch gestures on the emulator is impossible unless you have a
suitable touch screen and Windows 7. This gesture is therefore best
tested on a real device.
2.8. Working with Rotated and Scaled Screens
Just as with the raw touch data coordinates,
positions from the Gestures API are automatically updated to match the
rotation and scaling that is active on the screen, so no special
processing is required if these features are in use.
2.9. Experimenting with the Gestures API
The GesturesDemo example project will help you experiment with all the gestures we have discussed in this section. It is similar to the TouchPanelDemo from the previous section, but uses different icons for each of the recognized gestures. The icons are shown in Figure 1.
NOTE
This project deliberately displays the icons a
little above and to the left of the actual touch point so that they can
be seen when touching a real phone (otherwise they appear directly
beneath your fingers and are impossible to see). This looks a little odd
in the emulator, however, as their positions don't directly correspond
to the mouse cursor position, so don't be surprised by this.
By default, the project is set to recognize the Tap, DoubleTap, FreeDrag, Flick and Hold
gestures. Try enabling and disabling each of the gesture types and
experiment with the movement patterns needed to initiate each. You can
also use this as a simple way to see how the gestures relate to one
another (for example, try enabling all three of the drag gestures and
see how XNA decides which one to use).